If Anyone Builds it, Everyone Dies review – how AI could kill us all
What if I told you I could stop you worrying about climate change, and all you had to do was read one book? Great, you'd say, until I mentioned that you'd stop worrying because the book says our species has only a few years before it's wiped out by superintelligent AI anyway. We don't know exactly what form this extinction will take: perhaps an energy-hungry AI will let the millions of fusion power stations it has built run hot, boiling the oceans. Maybe it will want to reconfigure the atoms in our bodies into something more useful. There are many possibilities, almost all of them bad, say Eliezer Yudkowsky and Nate Soares in If Anyone Builds It, Everyone Dies, and who knows which will come true.
No, AI isn't going to kill us all, despite what this new book says
In the totality of human existence, there are an awful lot of things for us to worry about. Money troubles, climate change and finding love and happiness rank highly on the list for many people, but for a dedicated few, one concern rises above all else: that artificial intelligence will eventually destroy the human race. Eliezer Yudkowsky at the Machine Intelligence Research Institute (MIRI) in California has been proselytising this cause for a quarter of a century, to a small if dedicated following. Then we entered the ChatGPT era, and his ideas on AI safety were thrust into the mainstream, echoed by tech CEOs and politicians alike. This book, written with Nate Soares, also at MIRI, is Yudkowsky's attempt to distil his argument into a simple, easily digestible message that will be picked up across society.